Hierarchical multi-granularity classification (HMC) assigns hierarchical multi-granularity labels to each object, e.g., ["Albatross", "Laysan Albatross"] from coarse to fine, and focuses on encoding the label hierarchy. However, the definition of what counts as fine-grained is subjective, and image quality may affect recognition. Thus, samples could be observed at any level of the hierarchy, e.g., ["Albatross"] or ["Albatross", "Laysan Albatross"], and examples discerned only at coarse categories are usually neglected in the conventional setting of HMC. In this paper, we study the HMC problem in which objects are labeled at any level of the hierarchy. The essential designs of the proposed method are derived from two motivations: (1) learning with objects labeled at various levels should transfer hierarchical knowledge between levels; (2) lower-level classes should inherit the attributes associated with their upper-level superclasses. The proposed combinatorial loss maximizes the marginal probability of the observed ground-truth label by aggregating information from the related labels defined in the tree hierarchy. If the observed label is at the leaf level, the combinatorial loss further imposes a multi-class cross-entropy loss to increase the weight of the fine-grained classification loss. Considering hierarchical feature interaction, we propose a hierarchical residual network (HRN) in which granularity-specific features from parent levels, acting as residual connections, are added to the features of child levels. Experiments on three commonly used datasets demonstrate the effectiveness of our approach compared with state-of-the-art HMC methods and fine-grained visual classification (FGVC) methods that exploit the label hierarchy.
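To make the two design ideas concrete, below is a minimal PyTorch-style sketch of (a) a combinatorial-style loss that scores a coarse label by summing the softmax probabilities of its descendant leaves, and (b) a residual connection that adds parent-level features to child-specific features. The toy hierarchy, tensor shapes, and names such as `descendant_leaves` and `HierarchicalResidualBlock` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical two-level hierarchy: 2 superclasses, 4 leaf classes.
# descendant_leaves[c] lists the leaf indices beneath coarse class c (assumed layout).
descendant_leaves = {0: [0, 1], 1: [2, 3]}

def combinatorial_loss(leaf_logits, observed_level, observed_label):
    """Negative log marginal probability of the observed label.

    If the label is a coarse class, its probability is the sum of the softmax
    probabilities of all leaf classes beneath it in the tree; if the label is
    a leaf, this reduces to standard cross-entropy.
    """
    leaf_probs = F.softmax(leaf_logits, dim=-1)            # (B, num_leaves)
    if observed_level == "leaf":
        marginal = leaf_probs[:, observed_label]
    else:  # coarse label: aggregate the probabilities of its descendant leaves
        idx = torch.tensor(descendant_leaves[observed_label])
        marginal = leaf_probs[:, idx].sum(dim=-1)
    return -torch.log(marginal + 1e-12).mean()

class HierarchicalResidualBlock(torch.nn.Module):
    """Child-level features = child-specific features + parent-level features."""
    def __init__(self, dim):
        super().__init__()
        self.child_branch = torch.nn.Linear(dim, dim)

    def forward(self, shared_feat, parent_feat):
        child_specific = torch.relu(self.child_branch(shared_feat))
        return child_specific + parent_feat    # residual connection from the parent level
```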
Skeleton-based action recognition is widely used in various areas such as surveillance and human-machine interaction. Existing models are mainly learned in a supervised manner, which relies on large-scale labeled data that can be infeasible to obtain when labels are expensive. In this paper, we propose a novel Contrastive-Reconstruction Representation Learning network (CRRL) that simultaneously captures postures and motion dynamics for unsupervised skeleton-based action recognition. It mainly consists of three parts: a sequence reconstructor, a contrastive motion learner, and an information fuser. The sequence reconstructor learns representations from skeleton coordinate sequences via reconstruction, so the learned representations tend to focus on trivial postural coordinates and fall short in learning motion. To enhance motion learning, the contrastive motion learner performs contrastive learning between the representations learned from the coordinate sequence and from an additional velocity sequence, respectively. Finally, in the information fuser, we explore various strategies for combining the sequence reconstructor and the contrastive motion learner, and propose to capture postures and motions simultaneously via a knowledge-distillation-based fusion strategy that transfers motion learning from the contrastive motion learner to the sequence reconstructor. Experimental results on several benchmarks, i.e., NTU RGB+D 60, NTU RGB+D 120, CMU mocap, and NW-UCLA, demonstrate the promise of the proposed CRRL method against existing state-of-the-art approaches.
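A minimal sketch of the ingredients described above, under assumed tensor layouts: the velocity sequence as first-order frame differences, an InfoNCE-style contrast between the coordinate-stream and velocity-stream representations, and a distillation term that pulls the reconstructor's features toward the motion learner's. Function names and hyperparameters are placeholders, not CRRL's actual code.

```python
import torch
import torch.nn.functional as F

def velocity_sequence(x):
    """First-order temporal differences of a skeleton sequence x of shape (B, T, J, C)."""
    return x[:, 1:] - x[:, :-1]

def info_nce(z_coord, z_vel, temperature=0.1):
    """Contrast coordinate-stream and velocity-stream representations.

    Matching pairs within a batch are positives; all other pairs are negatives.
    """
    z1 = F.normalize(z_coord, dim=-1)
    z2 = F.normalize(z_vel, dim=-1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

def distillation_loss(student_feat, teacher_feat):
    """Transfer motion knowledge from the contrastive motion learner (teacher)
    to the sequence reconstructor (student) by matching their features."""
    return F.mse_loss(student_feat, teacher_feat.detach())
```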
Although state-of-the-art traditional representation learning (TRL) models show competitive performance on knowledge graph completion, there is no parameter sharing between the embeddings of entities, and the connections between entities are weak. Therefore, neighbor-aggregation-based representation learning (NARL) models have been proposed, which encode the information in an entity's neighbors into its embedding. However, existing NARL models either utilize only one-hop neighbors, ignoring the information in multi-hop neighbors, or exploit multi-hop neighbors through hierarchical neighbor aggregation, which destroys the completeness of multi-hop neighbors. In this paper, we propose a NARL model named RMNA, which obtains and filters horn rules through a rule mining algorithm and uses the selected horn rules to transform valuable multi-hop neighbors into one-hop neighbors; the information in valuable multi-hop neighbors can therefore be fully utilized by aggregating these one-hop neighbors. In experiments, we compare RMNA with state-of-the-art TRL models and NARL models. The results show that RMNA achieves competitive performance.
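The rule-based rewiring can be illustrated on a toy graph: a mined horn rule chains two one-hop relations into a new one-hop edge, so the information carried by a valuable two-hop neighbor becomes directly aggregable. The triples and relation names below are made up for illustration only, and the rule itself is assumed rather than taken from the paper.

```python
from collections import defaultdict

# Toy knowledge graph as (head, relation, tail) triples (illustrative data).
triples = [
    ("alice", "born_in", "paris"),
    ("paris", "located_in", "france"),
]

# An assumed horn rule: born_in(x, y) AND located_in(y, z) => nationality_region(x, z)
rule_body = ("born_in", "located_in")
rule_head = "nationality_region"

def apply_rule(triples, body, head):
    """Turn multi-hop neighbors matched by the rule body into one-hop neighbors."""
    by_rel = defaultdict(list)
    for h, r, t in triples:
        by_rel[r].append((h, t))
    new_edges = []
    for x, y1 in by_rel[body[0]]:
        for y2, z in by_rel[body[1]]:
            if y1 == y2:                       # chain the two hops through y
                new_edges.append((x, head, z))
    return new_edges

print(apply_rule(triples, rule_body, rule_head))
# [('alice', 'nationality_region', 'france')] -> aggregated as a one-hop neighbor
```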
Deep learning (DL) based hyperspectral image (HSI) denoising approaches directly learn the nonlinear mapping between the observed noisy images and the underlying clean images. They usually do not consider the physical characteristics of HSIs and therefore lack the interpretability that is key to understanding their denoising mechanisms. To address this issue, we propose a novel model-guided interpretable network for HSI denoising. Specifically, fully taking into account the spatial redundancy, spectral low-rankness, and spectral-spatial properties of HSIs, we first establish a subspace-based multi-dimensional sparse model. This model first projects the observed HSI into a low-dimensional orthogonal subspace and then represents the projected image with a multi-dimensional dictionary. The model is then unfolded into an end-to-end network named SMDS-Net, whose fundamental modules are seamlessly connected with the denoising procedure of the model. This endows SMDS-Net with clear physical meaning, i.e., learning the low-rankness and sparsity of HSIs. Finally, all key variables, including the dictionaries and the thresholding parameters, are obtained by end-to-end training. Extensive experiments and comprehensive analysis confirm the denoising ability and interpretability of our method against state-of-the-art HSI denoising methods.
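A rough numerical sketch of the model-guided pipeline, assuming a plain SVD spectral subspace and a simple soft-thresholding step in place of the learned multi-dimensional dictionary and learned thresholds; it only illustrates the project / sparsify / back-project structure, not SMDS-Net itself.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the L1 norm (sparsity-inducing step)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def subspace_sparse_denoise(noisy_hsi, k=8, tau=0.05):
    """One pass of the sketched pipeline on an HSI of shape (H, W, B):
    1) project spectra onto a k-dimensional orthogonal subspace (spectral low-rankness);
    2) soft-threshold the projected coefficients as a stand-in for the learned
       multi-dimensional dictionary and thresholding modules;
    3) map back to the full spectral space.
    """
    h, w, b = noisy_hsi.shape
    flat = noisy_hsi.reshape(-1, b)                  # (H*W, B) spectra as rows
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    basis = vt[:k].T                                 # (B, k), orthonormal columns
    coeff = flat @ basis                             # projection onto the subspace
    coeff = soft_threshold(coeff, tau)               # sparsify the representation
    return (coeff @ basis.T).reshape(h, w, b)        # back-projection

denoised = subspace_sparse_denoise(np.random.rand(16, 16, 31).astype(np.float32))
```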
Due to their ability to offer more comprehensive information than data from a single view, multi-view (multi-source, multi-modal, multi-perspective, etc.) data are being used more frequently in remote sensing tasks. However, as the number of views grows, the issue of data quality becomes more apparent, limiting the potential benefits of multi-view data. Although recent deep neural network (DNN) based models can learn the weights of the data adaptively, the lack of research on explicitly quantifying the data quality of each view when fusing them leaves these models unexplainable, and they perform unsatisfactorily and inflexibly in downstream remote sensing tasks. To fill this gap, in this paper, evidential deep learning is introduced to the task of aerial-ground dual-view remote sensing scene classification to model the credibility of each view. Specifically, the theory of evidence is used to calculate an uncertainty value that describes the decision-making risk of each view. Based on this uncertainty, a novel decision-level fusion strategy is proposed to ensure that the view with lower risk obtains more weight, making the classification more credible. On two well-known, publicly available datasets of aerial-ground dual-view remote sensing images, the proposed approach achieves state-of-the-art results, demonstrating its effectiveness. The code and datasets of this article are available at the following address: https://github.com/gaopiaoliang/Evidential.
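A small sketch of the evidential idea, assuming the common subjective-logic parameterization (Dirichlet alpha = evidence + 1, uncertainty u = K / sum(alpha)) and a simple credibility-weighted average as a stand-in for the paper's own decision-level fusion rule; the evidence values below are made up.

```python
import numpy as np

def view_uncertainty(evidence):
    """Subjective-logic style uncertainty for one view.

    evidence: non-negative evidence vector over K classes (e.g., from a softplus head).
    alpha = evidence + 1 parameterizes a Dirichlet; u = K / sum(alpha).
    """
    alpha = evidence + 1.0
    K = alpha.shape[-1]
    u = K / alpha.sum(-1)
    belief = evidence / alpha.sum(-1)
    return belief, u

# Illustrative evidence from an aerial view and a ground view (made-up numbers).
b_air, u_air = view_uncertainty(np.array([9.0, 1.0, 0.5]))
b_gnd, u_gnd = view_uncertainty(np.array([0.5, 0.4, 0.6]))

# A simple risk-aware fusion: the view with lower uncertainty receives more weight.
w_air, w_gnd = (1 - u_air), (1 - u_gnd)
fused = (w_air * b_air + w_gnd * b_gnd) / (w_air + w_gnd)
print(fused.argmax())   # predicted class after credibility-weighted fusion
```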
Video-language pre-training has advanced the performance of various downstream video-language tasks. However, most previous methods directly inherit or adapt typical image-language pre-training paradigms to video-language pre-training, thus not fully exploiting the unique characteristic of video, i.e., temporal. In this paper, we propose a Hierarchical Temporal-Aware video-language pre-training framework, HiTeA, with two novel pre-training tasks for modeling cross-modal alignment between moments and texts as well as the temporal relations of video-text pairs. Specifically, we propose a cross-modal moment exploration task to explore moments in videos, which results in detailed video moment representation. Besides, the inherent temporal relations are captured by aligning video-text pairs as a whole in different time resolutions with multi-modal temporal relation exploration task. Furthermore, we introduce the shuffling test to evaluate the temporal reliance of datasets and video-language pre-training models. We achieve state-of-the-art results on 15 well-established video-language understanding and generation tasks, especially on temporal-oriented datasets (e.g., SSv2-Template and SSv2-Label) with 8.6% and 11.1% improvement respectively. HiTeA also demonstrates strong generalization ability when directly transferred to downstream tasks in a zero-shot manner. Models and demo will be available on ModelScope.
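A sketch of what a shuffling test might look like: score video-text pairs once with ordered frames and once with randomly permuted frames, and report the gap. `model`, `score_fn`, and the data layout are placeholders rather than HiTeA's interface.

```python
import random

def shuffling_test(model, videos, texts, score_fn):
    """Compare matching scores on ordered vs. temporally shuffled frames.

    A large drop indicates that the model (and the dataset) actually rely on
    temporal order rather than static appearance alone.
    """
    def score(frames_list):
        return sum(score_fn(model, frames, text)
                   for frames, text in zip(frames_list, texts)) / len(texts)

    ordered = score(videos)
    shuffled = score([random.sample(frames, len(frames)) for frames in videos])
    return ordered - shuffled   # temporal reliance gap
```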
Face manipulation detection has been receiving a lot of attention because of its importance for the reliability and security of face images. Recent studies focus on using auxiliary information or prior knowledge to capture robust manipulation traces, which has been shown to be a promising direction. As one of the important face features, the face depth map, which has been shown to be effective in other areas such as face recognition and face detection, has unfortunately received little attention in the literature on detecting manipulated face images. In this paper, we explore the possibility of incorporating the face depth map as auxiliary information to tackle the problem of face manipulation detection in real-world applications. To this end, we first propose a Face Depth Map Transformer (FDMT) to estimate the face depth map patch by patch from an RGB face image, which is able to capture the local depth anomalies created by manipulation. The estimated face depth map is then treated as auxiliary information to be integrated with the backbone features using a newly designed Multi-head Depth Attention (MDA) mechanism. Various experiments demonstrate the advantage of our proposed method for face manipulation detection.
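The MDA mechanism is not specified in detail here, so the following is only a generic multi-head cross-attention block in that spirit: depth tokens act as queries over RGB backbone tokens, and the response is fused back into the backbone stream. Layer sizes and the residual fusion are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class DepthGuidedCrossAttention(nn.Module):
    """A cross-attention sketch: estimated depth features query the RGB backbone features."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, backbone_tokens, depth_tokens):
        # backbone_tokens, depth_tokens: (B, N, dim) patch-level features
        attended, _ = self.attn(query=depth_tokens,
                                key=backbone_tokens,
                                value=backbone_tokens)
        # Fuse the depth-guided response back into the backbone stream.
        return self.norm(backbone_tokens + attended)
```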
Implicit regularization is an important way to interpret neural networks. Recent theory has started to explain implicit regularization with the model of deep matrix factorization (DMF) and to analyze the trajectory of discrete gradient dynamics in the optimization process. The steps of these discrete gradient dynamics are relatively small but not infinitesimal, which fits well with the practical implementation of neural networks. Currently, discrete gradient dynamics analysis has been successfully applied to shallow networks but encounters the difficulty of complex computation for deep networks. In this work, we introduce another discrete gradient dynamics approach to explain implicit regularization, i.e., landscape analysis. It mainly focuses on special regions of the optimization landscape, such as saddle points and local minima. We theoretically establish the connection between saddle point escaping (SPE) stages and the matrix rank in DMF. We prove that, for a rank-R matrix reconstruction, DMF will converge to a second-order critical point after R stages of SPE. This conclusion is further verified experimentally on a low-rank matrix reconstruction problem. This work provides a new theory for analyzing implicit regularization in deep learning.
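A small experiment in the spirit of the verification described above, assuming a vanilla deep matrix factorization trained on masked entries of a rank-2 matrix; printing the number of dominant singular values over training makes any staged (SPE-like) rank growth visible. Sizes, learning rate, optimizer, and thresholds are arbitrary choices, not the paper's setup.

```python
import torch

# Deep matrix factorization: parameterize X as a product of `depth` factors and fit
# only the observed entries of a low-rank target (illustrative sizes and rank).
d, depth, rank = 20, 3, 2
target = torch.randn(d, rank) @ torch.randn(rank, d)
mask = (torch.rand(d, d) < 0.5).float()             # observed-entry mask

factors = [torch.nn.Parameter(1e-2 * torch.randn(d, d)) for _ in range(depth)]
opt = torch.optim.Adam(factors, lr=1e-3)

for step in range(20001):
    X = factors[0]
    for W in factors[1:]:
        X = X @ W
    loss = (((X - target) * mask) ** 2).sum() / mask.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 2000 == 0:
        # Number of dominant singular values of X; with small, balanced initialization
        # it tends to grow in stages, mirroring the SPE picture described above.
        svals = torch.linalg.svdvals(X.detach())
        print(step, loss.item(), (svals > 1e-2 * svals[0]).sum().item())
```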
Future work sentences (FWS) are the particular sentences in academic papers that contain the authors' description of their proposed follow-up research directions. This paper presents methods to automatically extract FWS from academic papers and classify them according to the different future directions embodied in the paper's content. FWS recognition methods will enable subsequent researchers to locate future work sentences more accurately and quickly and reduce the time and cost of acquiring the corpus. Current work on the automatic identification of future work sentences is relatively limited, and existing research cannot accurately identify FWS in academic papers, and thus cannot support large-scale data mining. Furthermore, the content of future work has many aspects, and subdividing this content facilitates the analysis of specific development directions. In this paper, Natural Language Processing (NLP) is used as a case study, and FWS are extracted from academic papers and classified into different types. We manually build an annotated corpus with six different types of FWS. Then, automatic recognition and classification of FWS are implemented using machine learning models, and the performance of these models is compared based on the evaluation metrics. The results show that the Bernoulli Bayesian model has the best performance on the automatic recognition task, with the Macro F1 reaching 90.73%, and the SCIBERT model has the best performance on the automatic classification task, with the weighted average F1 reaching 72.63%. Finally, we extract keywords from FWS to gain a deeper understanding of the key content they describe, and we also demonstrate that the content set out in FWS is reflected in subsequent research work by measuring the similarity between future work sentences and abstracts.
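A minimal scikit-learn sketch of the recognition step with a Bernoulli naive Bayes classifier over binary word-presence features; the sentences and labels below are toy examples, not the authors' annotated corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus (the real annotated corpus is the authors' own).
sentences = [
    "In future work we will extend the model to multilingual settings.",
    "We report results on two benchmark datasets.",
    "Future research will explore larger pre-trained language models.",
    "The encoder consists of six transformer layers.",
]
labels = [1, 0, 1, 0]   # 1 = future work sentence, 0 = other

# Binary word-presence features suit the Bernoulli event model.
clf = make_pipeline(CountVectorizer(binary=True), BernoulliNB())
clf.fit(sentences, labels)
print(clf.predict(["We plan to investigate cross-lingual transfer in future work."]))
```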
We propose Monte Carlo Nonlocal physics-informed neural networks (MC-Nonlocal-PINNs), a generalization of MC-fPINNs in \cite{guo2022monte}, for solving general nonlocal models such as integral equations and nonlocal PDEs. As in MC-fPINNs, our MC-Nonlocal-PINNs handle the nonlocal operators in a Monte Carlo way, resulting in a very stable approach for high-dimensional problems. We present a variety of test problems, including high-dimensional Volterra-type integral equations, hypersingular integral equations, and nonlocal PDEs, to demonstrate the effectiveness of our approach.
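A toy illustration of handling a nonlocal (integral) term the Monte Carlo way inside a PINN-style loss, on the simple Volterra-type equation u(x) = 1 + \int_0^x u(s) ds, whose exact solution is exp(x); the network size, sampling scheme, and training loop are assumptions, not the paper's setup.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def mc_integral(x, n_samples=64):
    """Monte Carlo estimate of \int_0^x u(s) ds for each collocation point x."""
    s = torch.rand(x.shape[0], n_samples, 1) * x.unsqueeze(-1)    # s ~ U(0, x)
    u_s = net(s.reshape(-1, 1)).reshape(x.shape[0], n_samples)
    return x * u_s.mean(dim=1, keepdim=True)                      # |[0, x]| * mean

for step in range(2000):
    x = torch.rand(128, 1)                        # collocation points in (0, 1)
    residual = net(x) - 1.0 - mc_integral(x)      # nonlocal residual, MC-estimated
    loss = (residual ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```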